Lecture 08: Partial Eigen Decomposition of Large Symmetric Matrices via Thick-Restart Lanczos with Explicit External Deflation and its Communication-Avoiding Variant
There are continual and compelling needs for computing many eigenpairs of very large Hermitian matrices in physical simulations and data analysis. Although the Lanczos method is effective for computing a few eigenvalues, it can be expensive for computing a large number of them. To improve the performance of the Lanczos method, in this talk we present a combination of explicit external deflation (EED) with an s-step variant of thick-restart Lanczos (s-step TRLan). The s-step Lanczos method can achieve an order-of-s reduction in data movement, while EED enables the computation of eigenpairs in batches, along with a number of other advantages.
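The batching mechanism of EED is simple enough to sketch. Below is a minimal Python illustration, not the authors' implementation: SciPy's eigsh (ARPACK's implicitly restarted Lanczos) stands in for the s-step thick-restart Lanczos inner solver, and the function name eigs_in_batches, the batch size, and the spectral-shift heuristic are all illustrative assumptions.

```python
# Sketch of explicit external deflation (EED): compute eigenpairs of a
# symmetric matrix in batches, shifting each converged batch out of the
# wanted end of the spectrum. eigsh stands in for (s-step) TRLan.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh

def eigs_in_batches(A, nev, batch=10):
    n = A.shape[0]
    # shift > lambda_max - lambda_min pushes deflated pairs past the wanted end
    shift = 2 * abs(A).sum(axis=1).max()      # crude Gershgorin-type bound
    V, lam = np.zeros((n, 0)), np.zeros(0)
    while V.shape[1] < nev:
        k = min(batch, nev - V.shape[1])
        def matvec(x, Vc=V):                  # deflated operator A + shift * Vc Vc^T
            y = A @ x
            return y + shift * (Vc @ (Vc.T @ x)) if Vc.shape[1] else y
        Ad = LinearOperator((n, n), matvec=matvec, dtype=float)
        w, X = eigsh(Ad, k=k, which='SA')     # next batch of smallest eigenpairs
        lam, V = np.concatenate([lam, w]), np.hstack([V, X])
    return lam, V

# toy usage: 1D Laplacian
n = 500
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
lam, V = eigs_in_batches(A, nev=20, batch=5)
print(np.sort(lam)[:5])
```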
Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations
The iterative diagonalization of a sequence of large, ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods that employ a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme that effectively combines global and locally accelerated preconditioners for the rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver: it outperforms current state-of-the-art global preconditioning schemes, and it is as efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method is for the well-conditioned standard eigenvalue problems produced by planewave methods.
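To make the eigensolver concrete, here is a minimal, hedged sketch of a preconditioned block steepest descent iteration for A x = lambda B x in Python. The preconditioner is abstracted into a callable T, since the paper's hybrid global/local construction is not reproduced here; the Jacobi stand-in in the usage example and all names are illustrative assumptions.

```python
# Minimal sketch of preconditioned block steepest descent (BSD) for the
# generalized eigenproblem A x = lambda B x. T(R) applies a preconditioner
# to the block residual; the paper's hybrid scheme would combine a global
# preconditioner with locally accelerated ones at this step.
import numpy as np
from scipy.linalg import eigh

def block_sd(A, B, T, X, iters=100):
    k = X.shape[1]
    lam = np.zeros(k)
    for _ in range(iters):
        R = A @ X - (B @ X) * lam                    # block residual (lam broadcasts per column)
        S = np.linalg.qr(np.hstack([X, T(R)]))[0]    # expand search space, orthonormalize
        mu, Z = eigh(S.T @ (A @ S), S.T @ (B @ S))   # Rayleigh-Ritz on span([X, T(R)])
        X, lam = S @ Z[:, :k], mu[:k]                # keep the k smallest Ritz pairs
    return lam, X

# toy usage with a Jacobi (diagonal) preconditioner as a crude stand-in
# for the global preconditioner of the abstract
rng = np.random.default_rng(0)
n, k = 200, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)
T = lambda R: R / np.diag(A)[:, None]
lam, X = block_sd(A, B, T, rng.standard_normal((n, k)))
print(lam)
```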
A Bi-level Nonlinear Eigenvector Algorithm for Wasserstein Discriminant Analysis
Much like classical Fisher linear discriminant analysis, Wasserstein discriminant analysis (WDA) is a supervised linear dimensionality-reduction method that seeks a projection matrix to maximize the dispersion between different data classes and minimize the dispersion within the same data class. In contrast to Fisher's method, however, WDA can account for both global and local interconnections between data classes through a regularized Wasserstein distance. WDA is formulated as a bi-level nonlinear trace-ratio optimization. In this paper, we present a bi-level nonlinear eigenvector (NEPv) algorithm, called WDA-nepv. The inner kernel of WDA-nepv, which computes the optimal transport matrix of the regularized Wasserstein distance, is formulated as an NEPv, and the outer kernel, for the trace-ratio optimization, is formulated as another NEPv. Consequently, both kernels can be computed efficiently via self-consistent-field iterations and modern solvers for linear eigenvalue problems. Compared with existing algorithms for WDA, WDA-nepv is derivative-free and surrogate-model-free. The computational efficiency and classification accuracy of WDA-nepv are demonstrated on synthetic and real-life datasets.
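Below is a hedged sketch of the self-consistent-field iteration at the heart of such NEPv kernels, with the WDA-specific coefficient matrices abstracted into a generic callable H. The synthetic H in the usage example and the choice of the k largest eigenvectors (natural for trace-ratio maximization) are assumptions, not details taken from the paper.

```python
# Minimal sketch of a self-consistent-field (SCF) iteration for an
# eigenvector-dependent eigenvalue problem H(V) V = V diag(lam).
# H is a generic callable; the WDA-specific matrices are not reproduced.
import numpy as np
from scipy.linalg import eigh

def scf(H, V0, tol=1e-8, maxit=500):
    V, k = V0, V0.shape[1]
    for _ in range(maxit):
        lam, U = eigh(H(V))        # solve the linear eigenproblem at the current V
        V_new = U[:, -k:]          # k largest eigenvectors (trace-ratio maximization)
        # component of V_new outside span(V); stop when self-consistent
        if np.linalg.norm(V_new - V @ (V.T @ V_new), 2) < tol:
            return lam[-k:], V_new
        V = V_new
    return lam[-k:], V

# toy usage: H(V) = C + alpha * V V^T, a synthetic NEPv (assumption)
rng = np.random.default_rng(1)
n, k, alpha = 50, 3, 0.1
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
lam, V = scf(lambda V: C + alpha * (V @ V.T),
             np.linalg.qr(rng.standard_normal((n, k)))[0])
print(lam)
```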
2D Eigenvalue Problems I: Existence and Number of Solutions
A two-dimensional eigenvalue problem (2DEVP) of a Hermitian matrix pair is introduced in this paper. The 2DEVP can be viewed as a linear algebraic formulation of the well-known eigenvalue optimization problem of a parameter-dependent Hermitian matrix. We present fundamental properties of the 2DEVP, including the existence of solutions, a necessary and sufficient condition for a finite number of 2D-eigenvalues, and variational characterizations. We use eigenvalue optimization problems arising from the quadratically constrained quadratic program and from the computation of the distance to instability to show their connections with the 2DEVP and the new insights into these problems derived from its properties.
Comment: 24 pages, 5 figures
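The abstract's inline formula for the parameter matrix appears to have been lost in extraction. As a hedged reconstruction, assuming the pair is written (A, C) as in the related 2DEVP literature, the eigenvalue optimization of H(mu) = A - mu C leads to stationarity conditions of the following form:

```latex
% Hedged reconstruction, not verbatim from the paper: find (mu, lambda) in R^2
% and x != 0 such that
\begin{align*}
  (A - \mu C)\,x &= \lambda x, \\
  x^{H} C x &= 0, \qquad x^{H} x = 1,
\end{align*}
% where x^H C x = 0 encodes stationarity of an eigenvalue curve lambda(mu)
% of H(mu) = A - mu C, since d(lambda)/d(mu) = -x^H C x.
```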